Deploying Eggplant DAI in Containers
Eggplant DAI can be installed on Kubernetes using Helm. You will need to meet the following requirements:
| Requirement | Notes |
|---|---|
| Kubernetes cluster | Tested version 1.24. |
| ingress-nginx | Tested version 1.3.0 (chart version 4.2.1). |
| Keda v2 | Optional, for autoscaling engines. Tested version 2.7. |
| Eggplant DAI license | Speak to support if needed. |
When you've met those requirements, you can install the default Eggplant DAI deployment by creating a Helm values file. In the example below, substitute values appropriate to your own deployment.
```yaml
global:
  postgresql:
    auth:
      postgresPassword: postgres
  ingress:
    host: dai.example.com
  keycloak:
    host: dai.example.com
  devLicense: a-real-license-goes-here
  execLicense: a-real-license-goes-here
  objectStorage:
    minio:
      rootUser: "eggplant"
      rootPassword: "eggplant"
keycloak:
  externalDatabase:
    # This must match the value of global.postgresql.auth.postgresPassword
    password: postgres
keycloak-user-provisioner:
  adminUsers:
    daiAdmin:
      username: admin-username
      email: admin-email
      password: admin-password
```
A few notes:

- `global.ingress.host` and `global.keycloak.host` do not have to be the same domain, but they do have to be resolvable. You can do this either by having something like ExternalDNS deployed on your cluster or by manually creating the records and pointing them at your cluster.
- `keycloak-user-provisioner.adminUsers.daiAdmin.password` must be at least 12 characters long. You can add additional admin users by adding extra keys under `keycloak-user-provisioner.adminUsers`.
Full documentation for all values can be found in the Values section below.
Then deploy it to your Kubernetes cluster:
```shell
$ helm upgrade --install \
    --namespace dai \
    --create-namespace \
    dai \
    dai \
    --repo oci://harbor.dai.eggplant.cloud/charts/dai \
    --version 1.9.5 \
    --values dai.yaml \
    --wait
Release "dai" does not exist. Installing it now.
NAME: dai
LAST DEPLOYED: Fri Feb 17 08:20:17 2023
NAMESPACE: dai
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing dai.
```
Your release is named dai.
To learn more about the release, try:
```shell
$ helm status dai
$ helm get all dai
```
You can access your DAI instance by visiting dai.example.com.
```
admin username: admin-username
admin password: admin-password
```
In this installation, required third-party dependencies are installed and configured automatically using Bitnami Helm charts. These dependencies are:
| Dependency | Tested Chart Version | Tested App Version |
|---|---|---|
| RabbitMQ | 11.13.0 | 3.11.13 |
| PostgreSQL | 11.9.13 | 14.7.0 |
| MinIO | 12.2.6 | 2022.2.5 |
| Keycloak | 10.1.6 | 19.0.3 |
The Helm chart installs these dependencies, but it doesn't manage backups of data stored in PostgreSQL or MinIO. You need to arrange backups of these services in a production deployment as part of your disaster recovery plan. There's an example of one approach to backups later in this documentation.
Supported customisations
In the default installation above, all dependencies are deployed to Kubernetes with data stored in persistent volumes for PostgreSQL and MinIO. If you have an existing solution in place for PostgreSQL or AWS S3 compatible object storage you want to use instead, you can customise the Eggplant DAI installation to use these. Further, you may want to pass credentials using Kubernetes secrets rather than in the values file for improved security.
This section of the documentation gives examples showing how to customise your installation. All examples will use secrets for credentials. All the examples given are just snippets that are meant to be added to the default installation values demonstrated above.
Object storage configuration
Eggplant DAI depends on an S3 compatible object storage solution for persisting assets such as test screenshots. The Helm chart gives several options for configuring this.
Bundled MinIO (Default)
By default, the Eggplant DAI Helm chart deploys MinIO as a sub-chart with a random root-user and root-password.
You can override these random values by providing an existing secret. First, prepare a secret containing the credentials:
```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: dai-objectstorage
stringData:
  root-user: username
  root-password: password
```
```shell
$ kubectl -n dai apply -f dai-objectstorage.yaml
```
Then update your values file to point to the existing secret and run Helm upgrade:
```yaml
global:
  objectStorage:
    minio:
      existingSecret: dai-objectstorage
minio:
  auth:
    existingSecret: dai-objectstorage
```
```shell
$ helm upgrade --namespace dai dai oci://harbor.dai.eggplant.cloud/charts/dai --version 1.9.5 -f dai.yaml --wait
```
Note that `global.objectStorage.minio.existingSecret` and `minio.auth.existingSecret` must match.
You can further customise your MinIO installation by passing values under the `minio` key in your values file. The MinIO installation is provided by the Bitnami chart, so please refer to their documentation for available options.
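For instance, a minimal sketch of increasing the bundled MinIO's persistent volume size; the `persistence.size` path follows the Bitnami MinIO chart's values and should be verified against the Bitnami documentation for your chart version:

```yaml
# Hypothetical example: values passed through to the Bitnami MinIO sub-chart.
# Verify the exact keys against the Bitnami MinIO chart documentation.
minio:
  persistence:
    size: 50Gi
```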
Changes to the MinIO configuration will not be supported by Eggplant.
Existing MinIO
If you have an existing MinIO installation you can use this instead as follows, using the same secret created above.
```yaml
global:
  objectStorage:
    minio:
      existingSecret: dai-objectstorage
      endpoint: my.minio.deployment.example.com
minio:
  enabled: false
```
Note the `minio` key setting `enabled` to `false`. This prevents the bundled MinIO from being deployed.
Eggplant cannot provide support for MinIO installations external to your DAI installation.
S3
AWS S3 can be configured for object storage with an existing secret as follows. First, prepare a secret containing the credentials:
```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: dai-objectstorage
stringData:
  aws-access-key-id: my-access-key-id
  aws-secret-access-key: my-secret-access-key
```
```shell
$ kubectl -n dai apply -f dai-objectstorage.yaml
```
Modify your values file to update or add the following keys:
```yaml
global:
  objectStorage:
    provider: "aws"
    aws:
      existingSecret: dai-objectstorage
      awsAccessKeyIdKey: aws-access-key-id
      awsSecretAccessKeyKey: aws-secret-access-key
      region: "eu-west-1"
minio:
  enabled: false
```
Now you can deploy it to your cluster with Helm.
PostgreSQL
Eggplant DAI uses PostgreSQL for data storage. The Helm chart provides several options for configuring it.
Bundled PostgreSQL (Default)
By default, the Eggplant DAI Helm chart deploys PostgreSQL as a sub-chart with username and password both set to `postgres`.
To override this, create a secret containing the credentials:
```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: dai-postgres
stringData:
  postgres-password: my-postgres-password
```
Modify your values file to update or add the following keys and apply it to your cluster with Helm:
```yaml
global:
  postgresql:
    auth:
      existingSecret: dai-postgres
keycloak:
  externalDatabase:
    existingSecret: dai-postgres
    existingSecretPasswordKey: postgres-password
```
Note `keycloak.externalDatabase.existingSecretPasswordKey`: by default, the Bitnami Keycloak chart expects the existing secret to hold the database password under the key `password`, but the Bitnami PostgreSQL chart and DAI default to `postgres-password` as the key. You can either override the behaviour of the Keycloak chart, as above, or alternatively set `global.postgresql.auth.secretKeys.adminPasswordKey`.
The PostgreSQL installation is provided by the Bitnami chart. You can further customise it by passing options under the `postgresql` key in your values file. See the Bitnami documentation for available options.
If you override `extraEnvVars`, you need to ensure you also set the `POSTGRESQL_DATABASE` environment variable to `keycloak`. This creates the Keycloak database that is configured under the `keycloak` key in the default values.
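A sketch of such an override, assuming the Bitnami PostgreSQL chart's `primary.extraEnvVars` path and an illustrative extra variable; verify both against the Bitnami documentation for your chart version:

```yaml
# Hypothetical example: overriding extraEnvVars on the bundled PostgreSQL
# while keeping POSTGRESQL_DATABASE set to keycloak, as DAI requires.
postgresql:
  primary:
    extraEnvVars:
      - name: POSTGRESQL_DATABASE
        value: keycloak
      # Any additional variables you need go alongside it, for example:
      - name: POSTGRESQL_MAX_CONNECTIONS
        value: "200"
```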
Eggplant cannot support changes to the PostgreSQL configuration.
Existing PostgreSQL
If you have an existing PostgreSQL installation, or would like to use an external service like AWS RDS, you can do so.
Using the same existing secret as above, modify your values file to set the following keys:
```yaml
global:
  postgresql:
    host: my.postgresql.host.example.com
    auth:
      existingSecret: dai-postgres
keycloak:
  externalDatabase:
    existingSecret: dai-postgres
    existingSecretPasswordKey: postgres-password
    host: my.postgresql.host.example.com
postgresql:
  enabled: false
```
Note that if you use an existing PostgreSQL deployment, you also need to update the Keycloak configuration to use this.
Engine scaling
The engine component of Eggplant DAI is used for test execution and report generation. As your DAI instance becomes busier, you will need to scale this component to handle greater test volumes. We recommend using Keda to manage this scaling.
To use Keda, first install it according to the upstream instructions.
Only Keda v2 is supported.
Then enable Keda by adding the following to your values file:
```yaml
ai-engine:
  keda:
    enabled: true
```
If you can't use Keda for some reason, you can manually manage the number of engine replicas by adding the following to your values file, increasing it as your instance becomes busier.
```yaml
ai-engine:
  replicaCount: 2
```
Keycloak
Eggplant DAI depends on Keycloak for authentication and authorisation services. We bundle this as a sub-chart and do not currently support using your own Keycloak installation.
Backup and restore
You must regularly back up configuration and results data from your DAI installation. Data that needs to be backed up is stored in PostgreSQL (as configured for DAI and Keycloak) and in object storage.
How you back up this data will depend on how you've configured your deployment, but here we provide an example of how both can be backed up in the default installation shown at the start of this document.
Backup and restore PostgreSQL
Eggplant DAI uses several databases to store its data, so in the default installation we recommend using `pg_dumpall` to ensure you back up all databases. If you're using a shared database instance, you need to ensure you back up the following databases:

- `execution_service`
- `keycloak`
- `sut_service`
- `ttdb`
- `vam`
In the example below, we execute `pg_dumpall` directly in the default PostgreSQL pod. The result is then streamed to a `dai.dump` file on the local computer:
```shell
$ kubectl --namespace dai exec postgres-0 \
    -- /bin/sh -c \
    'export PGPASSWORD=$POSTGRES_PASSWORD && pg_dumpall --username postgres --clean' \
    > dai.dump
```
The command given here includes the `--clean` option, which causes `pg_dumpall` to include commands to drop the databases in the dump. This makes restoring easier, but be aware that restoring will drop and recreate those databases.
In reality, you would likely want to:

- compress the dump
- put it on a backup storage server
- execute it on a schedule.

But the use of `pg_dumpall` would still stand.
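As one illustration of those three steps, the backup could run in-cluster on a schedule with a Kubernetes CronJob. This is only a sketch: the database host (`postgres`), image tag, PVC name (`dai-backup`), and secret name (`dai-postgres`, matching the secret created earlier in this document) are assumptions to adapt to your deployment.

```yaml
# Hypothetical sketch: nightly compressed pg_dumpall backup via a CronJob.
# Host, image, secret, and PVC names are assumptions; adapt them.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dai-postgres-backup
  namespace: dai
spec:
  schedule: "0 2 * * *"   # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dumpall
              image: postgres:14
              command:
                - /bin/sh
                - -c
                - pg_dumpall --host postgres --username postgres --clean | gzip > /backup/dai-$(date +%Y%m%d).dump.gz
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: dai-postgres
                      key: postgres-password
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: dai-backup
```

Shipping the compressed dump off-cluster (for example to object storage on a backup server) is left to whatever tooling fits your environment.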
To restore the backup, you can reverse the process as follows:
```shell
$ kubectl --namespace dai exec -i postgres-0 \
    -- /bin/sh -c \
    'export PGPASSWORD=$POSTGRES_PASSWORD && psql --username postgres \
    --dbname postgres \
    --file -' < dai.dump
```

Note the `-i` flag on `kubectl exec`, which is required for the dump to be streamed to the pod's standard input.
A few notes:

- We used the `--clean` option when creating the dump. This means all databases in the backup will be dropped and recreated.
- We specify `--dbname postgres`. As the backup was created with `--clean`, you'll get errors if you connect to one of the databases being dropped as part of the restore.
Backup and restore MinIO
Images and other assets are stored in object storage rather than the database. You must back these up in addition to the database content discussed above. A quick way to run this backup from your local machine is demonstrated below. The example requires you to have the MinIO client tools installed locally.
```shell
$ ROOT_USER=$(kubectl -n dai get secret dai-objectstorage -o json | jq -r '.data."root-user"' | base64 -d)
$ ROOT_PASSWORD=$(kubectl -n dai get secret dai-objectstorage -o json | jq -r '.data."root-password"' | base64 -d)
$ kubectl -n dai port-forward service/minio 9000:9000 &
$ PID=$!
$ mc alias set minio http://localhost:9000 $ROOT_USER $ROOT_PASSWORD --api S3v4
$ mkdir backup
$ mc cp --recursive --quiet minio/ backup/
$ kill $PID
```
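Kubernetes stores `Secret` data base64-encoded, which is why the extraction pipeline above ends in `base64 -d`. A minimal local illustration of the round trip (the `eggplant` value is just an example string):

```shell
# Encode a value the way Kubernetes stores Secret data, then decode it back.
encoded=$(printf 'eggplant' | base64)   # -> "ZWdncGxhbnQ="
printf '%s' "$encoded" | base64 -d      # prints: eggplant
```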
As before, it's likely you'll want to compress the backup, move it to an appropriate storage server and execute it on a schedule.
To restore the backup, you can just reverse the copy command:
```shell
$ mc mb minio/assets
$ mc mb minio/screenshots
$ mc cp --recursive --quiet backup/ minio/
```
This assumes you've used the default configuration with separate assets and screenshots buckets, in which case you need to create the buckets with `mc mb` before you can restore.
Upgrading
The general procedure for upgrading is the same as for any Helm release:

- Back up your PostgreSQL and object storage data, depending on how you've deployed it.
- Update your repositories with `helm repo update`.
- Fetch and modify your values as needed for the new release with `helm get values` and a text editor.
- Run `helm upgrade`.
Each Eggplant DAI release may have specific additional steps, so before applying this procedure, please review the notes below for the upgrade you're performing.
Upgrading DAI 6.5 to 7.0
The DAI 7.0 release includes an update to MinIO that is incompatible with previous versions. If you are using the bundled MinIO for object storage, you need to back up the old MinIO installation and restore the data after the DAI upgrade:

- Back up the existing MinIO installation as described above.
- Remove the existing MinIO deployment and PVC:

  ```shell
  $ kubectl -n dai delete pvc -l app.kubernetes.io/name=minio --wait=false && kubectl -n dai delete deployment -l app.kubernetes.io/name=minio
  ```

- Review values and run `helm upgrade`. This will create a clean installation of MinIO at the same time as upgrading the other DAI components.
- Restore the existing MinIO data to the new MinIO deployment as described above.
Upgrading DAI 6.4 to 6.5
DAI 6.5 introduces a new Helm chart that is incompatible with previous releases. The recommended upgrade procedure for this release is therefore different:

- Back up your PostgreSQL and object storage data.
- Update your repositories with `helm repo update`.
- If using the bundled MinIO, fetch the root user and password for your MinIO deployment.
- Fetch the root user and password for your Keycloak deployment.
- Fetch your existing values and translate them to the new values required.
- Uninstall your old deployment with `helm uninstall -n dai dai`.
- Additionally, remove all old PVCs and jobs with `kubectl -n dai delete jobs --all && kubectl -n dai delete pvc --all`.
- Install the 6.5 Helm release with the following:

  ```shell
  $ helm install -n dai dai eggplant/dai --version 1.3.4 -f new-values.yaml --wait
  ```

- If using the bundled MinIO, restore data from backup.

The exact process may vary depending on your previous deployment. Please be careful to verify backups before deleting resources and to delete the correct resources.
While we recommend reviewing the rest of the chart documentation to create a new values file, below is a mapping from keys in the pre-6.5 values file to their location in the 6.5+ values file. This is not a complete list of values, only of those that have moved. You'll still need to review the documentation for the values to ensure you set all required keys and they are set correctly.
| Old Key | New Key |
|---|---|
| global.adminusername | keycloak-user-provisioner.adminUsers.daiAdmin.username |
| global.adminEmail | keycloak-user-provisioner.adminUsers.daiAdmin.email |
| global.adminPassword | keycloak-user-provisioner.adminUsers.daiAdmin.password |
| global.license | global.devLicense, global.execLicense* |
| externalDatabase | global.postgresql |
| externalBroker | global.rabbitmq |
| objectStorage | global.objectStorage |
| ingress.hostnames | global.ingress.host |
| ingress.tls | global.ingress.tls |
| keda.enabled | ai-engine.keda.enabled |
| keycloak.realm | global.keycloak.realm |
| keycloak.url | global.keycloak.host |
| keycloak.adminUser | global.keycloak.user |
| keycloak.adminPassword | global.keycloak.password |
| keycloak.smtp | keycloak-realm-provisioner.smtp |
Upgrading Eggplant DAI 6.3 to 6.4
The DAI 6.4 release includes an update of the internal version of Keycloak to version 19. To upgrade to this new version:
- Edit your yaml file to move the `keycloak.adminPassword` key to `keycloak.auth.adminPassword`.
- Similarly, if not using the default admin username, move the `keycloak.adminUser` key to `keycloak.auth.adminUser` within the yaml.
- The helm upgrade process deploys a new StatefulSet which is incompatible with the existing StatefulSet. The original StatefulSet therefore needs to be deleted prior to performing the helm upgrade (note that once this step is completed, the DAI instance will be inaccessible until the upgrade to 6.4 is complete):

```shell
$ kubectl delete statefulsets.apps -l app.kubernetes.io/name=keycloak --namespace dai
```
Upgrading Eggplant DAI from version 6.2 to 6.3
- If you use KEDA, v1 is no longer supported. You must upgrade to KEDA v2 before upgrading DAI. In order to do this, make sure you remove the `ai-engine` job before upgrading KEDA:

```shell
$ kubectl -n dai delete job ai-engine
```
Upgrading Eggplant DAI from version 5.3 to 6.0
- You no longer need to set the service token and JWT secret. Remove these values.
- The helm chart deploys a Keycloak instance in the same namespace as the rest of the DAI components. You must, however, specify the Keycloak URL, which is set to `https://kc-<ingress-hostname>`, where `<ingress-hostname>` is the parameter value that you specified in the values file.
Uninstalling
You can uninstall Eggplant DAI either by running `helm uninstall` or by removing the namespace you installed it to.
If you applied any customisations to use external resources, like a PostgreSQL instance or an S3 bucket, you'll need to remove these separately.
Values
Full documentation of all the supported values in the Eggplant DAI chart.
Support
Contact Eggplant Customer Support if you require further assistance.